The Morning After: Researchers find evidence of organic matter on Mars

Engadget

The Perseverance rover has found evidence of organic compounds in the Jezero Crater on Mars. Don't get too excited: these compounds could also have formed through nonbiological processes. But even if it's not proof of life on Mars, the results hint at complex organic conditions involving the "key building blocks for life." Organic molecules like those observed in the Jezero Crater contain carbon, and often hydrogen, atoms. They're the core components of life as we know it on Earth.


Researchers find evidence of racial, gender, and socioeconomic bias in chest X-ray classifiers

#artificialintelligence

Google and startups like Qure.ai, Aidoc, and DarwinAI are developing AI and machine learning systems that classify chest X-rays to help identify conditions like fractures and collapsed lungs. Several hospitals, including Mount Sinai, have piloted computer vision algorithms that analyze scans from patients with the novel coronavirus. But research from the University of Toronto, the Vector Institute, and MIT reveals that chest X-ray datasets used to train diagnostic models exhibit imbalance, biasing them against certain gender, socioeconomic, and racial groups. Partly due to a reticence to release code, datasets, and techniques, much of the data used today to train AI algorithms for diagnosing diseases may perpetuate inequalities. A team of U.K. scientists found that almost all eye disease datasets come from patients in North America, Europe, and China, meaning eye disease-diagnosing algorithms are less likely to work well for racial groups from underrepresented countries.
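The kind of dataset imbalance the Toronto/Vector/MIT researchers describe can be surfaced with a simple audit of patient metadata, when that metadata exists at all. The sketch below is a toy illustration, not the study's method; the records and attribute names are invented for the example.

```python
from collections import Counter

# Hypothetical patient metadata for a chest X-ray dataset
# (invented records for illustration, not real data).
records = [
    {"sex": "M", "insurance": "private"},
    {"sex": "M", "insurance": "private"},
    {"sex": "M", "insurance": "medicaid"},
    {"sex": "F", "insurance": "private"},
]

def group_shares(records, attribute):
    """Return each group's share of the dataset for one attribute."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

# Male patients make up 75% of this toy dataset -- a model trained on it
# sees far fewer examples of female patients.
print(group_shares(records, "sex"))  # {'M': 0.75, 'F': 0.25}
```

An audit like this only works when demographic attributes are recorded alongside the scans, which, as the facial-expression studies below note, is often not the case.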


Researchers find evidence of bias in facial expression data sets

#artificialintelligence

Researchers claim the data sets often used to train AI systems to detect expressions like happiness, anger, and surprise are biased against certain demographic groups. In a preprint study published on arXiv.org, they argue that machine learning algorithms become biased in part because they're given training samples that optimize their objectives toward majority groups. Unless explicitly modified, they perform worse for minority groups -- i.e., people represented by fewer samples. In domains like facial expression classification, it's difficult to compensate for skew because the training sets rarely contain information about attributes like race, gender, and age.
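The mechanism described above -- a model optimized on the whole dataset drifting toward the majority group's patterns -- can be caricatured in a few lines. This is a toy sketch, not the preprint's experiment; the groups, labels, and counts are invented, and the "classifier" is deliberately reduced to a majority-label baseline to make the effect visible.

```python
from collections import Counter

# Toy labeled samples: (group, true_label). Group "A" dominates, and the
# label distribution differs per group (all numbers invented).
samples = [("A", "happy")] * 80 + [("A", "angry")] * 10 + \
          [("B", "angry")] * 8 + [("B", "happy")] * 2

# A model minimizing overall error gravitates toward whatever fits the
# majority; the extreme caricature is "always predict the globally most
# common label".
majority_label = Counter(label for _, label in samples).most_common(1)[0][0]

def group_accuracy(group):
    """Accuracy of the majority-label baseline within one group."""
    truth = [label for g, label in samples if g == group]
    return sum(label == majority_label for label in truth) / len(truth)

print(majority_label)        # 'happy' -- driven by group A's 80 samples
print(group_accuracy("A"))   # ~0.89 on the majority group
print(group_accuracy("B"))   # 0.2 on the minority group
```

The overall accuracy looks respectable while the minority group is served poorly -- and, as the article notes, without group labels in the training set this gap is hard to even measure, let alone correct.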

